89 research outputs found

    Foundation Models in Smart Agriculture: Basics, Opportunities, and Challenges

    Full text link
    The past decade has witnessed the rapid development of machine learning (ML) and deep learning (DL) methodologies in agricultural systems, showcased by great successes in a variety of agricultural applications. However, these conventional ML/DL models have certain limitations: they heavily rely on large, costly-to-acquire labeled datasets for training, require specialized expertise for development and maintenance, and are mostly tailored for specific tasks, thus lacking generalizability. Recently, foundation models (FMs) have demonstrated remarkable successes in language and vision tasks across various domains. These models are trained on vast amounts of data from multiple domains and modalities. Once trained, they can accomplish versatile tasks with only minor fine-tuning and minimal task-specific labeled data. Despite their proven effectiveness and huge potential, there has been little exploration of applying FMs to agricultural fields. Therefore, this study aims to explore the potential of FMs in the field of smart agriculture. In particular, we present conceptual tools and technical background to facilitate the understanding of the problem space and uncover new research directions in this field. To this end, we first review recent FMs in the general computer science domain and categorize them into four categories: language FMs, vision FMs, multimodal FMs, and reinforcement learning FMs. Subsequently, we outline the process of developing agriculture foundation models (AFMs) and discuss their potential applications in smart agriculture. We also discuss the unique challenges associated with developing AFMs, including model training, validation, and deployment. Through this study, we contribute to the advancement of AI in agriculture by introducing AFMs as a promising paradigm that can significantly mitigate the reliance on extensive labeled datasets and enhance the efficiency, effectiveness, and generalization of agricultural AI systems.
    Comment: 16 pages, 2 figures
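
    To make the transfer-learning paradigm described above concrete, the following minimal sketch (not from the paper) freezes an ImageNet-pretrained backbone as a stand-in for a vision foundation model and fine-tunes only a small task-specific head on a modest labeled dataset; the dataset path, class count, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch (assumed, not from the paper): fine-tune only a small head on top
# of a frozen ImageNet-pretrained backbone, used here as a stand-in for a vision
# foundation model. "data/crop_disease/train" and NUM_CLASSES are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # hypothetical number of crop-disease categories

# Load the pretrained backbone and freeze its feature extractor.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new task-specific head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/crop_disease/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the small labeled set
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

    Freezing the backbone keeps the number of trainable parameters small, which is what lets the labeled-data requirement shrink relative to training a task-specific model from scratch.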

    Random Walk and Graph Cut for Co-Segmentation of Lung Tumor on PET-CT Images

    Full text link

    Multiple mesodermal lineage differentiation of Apodemus sylvaticus embryonic stem cells in vitro

    Get PDF
    Background: Embryonic stem (ES) cells have attracted significant attention from researchers around the world because of their ability to undergo indefinite self-renewal and produce derivatives from the three cell lineages, which has enormous value in research and clinical applications. To date, many ES cell lines of different mammals have been established and studied. In addition, AS-ES1 cells derived from Apodemus sylvaticus were recently established and identified by our laboratory as a new mammalian ES cell line. Hence further research on the application of AS-ES1 cells is warranted.
    Results: Herein we report the generation of multiple mesodermal AS-ES1 lineages via embryoid body (EB) formation by the hanging drop method and the addition of particular reagents and factors for induction at the stage of EB attachment. The AS-ES1 cells generated separately in vitro included adipocytes, osteoblasts, chondrocytes and cardiomyocytes. Histochemical staining, immunofluorescent staining and RT-PCR were carried out to confirm the formation of multiple mesodermal lineage cells.
    Conclusions: The appropriate reagents and culture milieu used in the mesodermal differentiation of mouse ES cells also guide the in vitro differentiation of AS-ES1 cells into distinct mesoderm-derived cells. This study provides a better understanding of the characteristics of AS-ES1 cells, an ES cell line from a new species, and promotes the use of Apodemus ES cells as a complement to mouse ES cells in future studies.

    The immunosuppressive effects and mechanisms of loureirin B on collagen-induced arthritis in rats

    Get PDF
    Introduction: Rheumatoid arthritis (RA) is a common disease mainly affecting the joints of the hands and wrists. The discovery of autoantibodies in the serum of patients revealed that RA belongs to the autoimmune diseases and laid a theoretical basis for its immunosuppressive therapy. The pathogenesis of autoimmune diseases mainly involves abnormal activation and proliferation of effector memory T cells, which is closely related to the elevated expression of Kv1.3, a voltage-gated potassium (Kv) channel on the effector memory T cell membrane. Drugs blocking the Kv1.3 channel showed a strong protective effect in RA model animals, suggesting that Kv1.3 is a target for the discovery of specific RA immunosuppressive drugs.
    Methods: In the present study, we synthesized loureirin B (LrB) and studied its effects on collagen-induced arthritis (CIA) in rats. The clinical score, paw volume and joint morphology of CIA model rats were compared. The percentages of CD3+, CD4+ and CD8+ T cells in rat peripheral blood mononuclear cells and spleen were analyzed with flow cytometry. The concentrations of the inflammatory cytokines interleukin (IL)-1b, IL-2, IL-4, IL-6, IL-10 and IL-17 in the serum of CIA rats were measured by enzyme-linked immunosorbent assay. The expression of IL-1b and IL-6 in joints and of Kv1.3 in peripheral blood mononuclear cells (PBMCs) was quantified by qPCR. To further study the mechanisms of the immunosuppressive effects of LrB, western blot and immunofluorescence were used to examine the expression of Kv1.3 and Nuclear Factor of Activated T Cells 1 (NFAT1) in two cell models: the Jurkat T cell line and extracted PBMCs.
    Results: LrB effectively reduced the clinical score and relieved joint swelling. LrB also decreased the percentage of CD4+ T cells while increasing the percentage of CD8+ T cells in the peripheral blood mononuclear cells and spleen of rats with CIA. The serum concentrations of the inflammatory cytokines IL-1b, IL-2, IL-6, IL-10 and IL-17 in CIA rats were significantly reduced by LrB. The qPCR results showed that Kv1.3 mRNA in the PBMCs of CIA rats was significantly higher than in controls and was significantly decreased in the LrB treatment groups. In addition, we confirmed in the cell models that LrB significantly decreased Kv1.3 protein on the cell membrane and inhibited the activation of NFAT1 upon immune stimulation.
    Conclusion: In summary, this study revealed that LrB could block NFAT1 activation and reduce Kv1.3 expression in activated T cells, thus inhibiting the proliferation of lymphocytes and the release of inflammatory cytokines, thereby effectively weakening the autoimmune responses in CIA rats. The immunosuppressive effects of LrB reveal its potential medicinal value in the treatment of RA.

    High-throughput robotic plant phenotyping using 3D machine vision and deep neural networks

    No full text
    The ability to correlate morphological traits of plants with their genotypes plays an important role in plant phenomics research. However, traditional plant phenotyping is time-consuming, labor-intensive, and prone to human errors. This dissertation documents my research in high-throughput robotic plant phenotyping for sorghum and maize plants using 3D machine vision and convolutional neural networks. Sorghum is an important grain crop and a promising feedstock for biofuel production due to its excellent drought tolerance and water use efficiency. The 3D surface model of a plant can potentially provide an efficient and accurate way to digitize plant architecture and accelerate sorghum plant breeding programs. A non-destructive 3D scanning system using a commodity depth camera was developed to take side-view images of plants at multiple growth stages. A 3D skeletonization algorithm was developed to analyze the plant architecture and segment individual leaves. Multiple phenotypic parameters were obtained from the skeleton and the reconstructed point cloud, including plant height, stem diameter, leaf angle, and leaf surface area. These image-derived features were highly correlated with the ground truth. Additionally, the results showed that stem volume was a promising predictor of shoot fresh weight and shoot dry weight. To address the challenges of in-field imaging for plant phenotyping caused by variable outdoor lighting, wind conditions, and occlusions of plants, a customized stereo module, PhenoStereo, was developed for acquiring high-quality image data under field conditions. PhenoStereo was used to acquire a set of sorghum plant images, and an automated point cloud data processing pipeline was developed to extract the stems and quantify their diameters via an optimized 3D modeling process. The pipeline employed a Mask Region-based Convolutional Neural Network (Mask R-CNN) for detecting stalk contours and the Semi-Global Block Matching (SGBM) stereo matching algorithm for generating disparity maps. The system-derived stem diameters were highly correlated with the ground truth. Additionally, PhenoStereo was used to quantify the leaf angle of maize plants under field conditions. Multiple tiers of PhenoStereo cameras were mounted on PhenoBot 3.0, a robotic vehicle designed to traverse between pairs of agronomically spaced crop rows, to capture side-view images of maize plants in the field. An automated image processing pipeline (AngleNet) was developed to detect each leaf angle as a triplet of keypoints in two-dimensional images and extract quantitative data from reconstructed 3D models. AngleNet-derived leaf angles and their associated internode heights were highly correlated with manually collected ground-truth measurements. This dissertation investigates and develops automated computer-vision-based robotic systems for plant phenotyping under controlled environments and in field conditions. In particular, a stereo module was customized and utilized to acquire high-quality image data for in-field plant phenotyping. With high-fidelity reconstructed 3D models and robust image processing algorithms, a series of plant-level and organ-level phenotypic traits of sorghum and maize plants were accurately extracted. The results demonstrated that, with proper customization, stereo vision can be a highly desirable sensing method for field-based plant phenotyping using high-fidelity 3D models reconstructed from stereoscopic images. The proposed approaches provide efficient alternatives to traditional phenotyping that could potentially accelerate breeding programs for improved plant architecture.
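
    As an illustration of the stereo-matching step named in the abstract, the following sketch (not the dissertation's actual pipeline) computes a disparity map with OpenCV's Semi-Global Block Matching on a rectified stereo pair and converts it to depth; the image files, focal length, and baseline are assumed placeholder values.

```python
# Illustrative sketch (not the dissertation's code): compute a disparity map with
# OpenCV's Semi-Global Block Matching on a rectified stereo pair and convert it to
# depth. File names, focal length, and baseline are assumed placeholder values.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # penalties on small/large disparity changes (smoothness)
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# StereoSGBM returns disparities in 16ths of a pixel; convert to float pixels.
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# With focal length f (pixels) and baseline B (meters), depth Z = f * B / disparity.
f, B = 1400.0, 0.10       # assumed camera parameters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```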


    A review of three-dimensional vision techniques in food and agriculture applications

    No full text
    In recent years, three-dimensional (3D) machine vision techniques have been widely employed in agriculture and food systems, leveraging advanced deep learning technologies. However, with the rapid development of 3D imaging techniques, the lack of a systematic review has hindered our ability to identify the most suitable imaging systems for specific agricultural and food applications. In this review, a variety of 3D imaging techniques are introduced, along with their working principles and applications in agriculture and food systems. These techniques include structured light-based 3D imaging, multi-view 3D imaging, Time-of-Flight (ToF)-based 3D imaging, Light Detection and Ranging (LiDAR), and depth estimation from monocular images. Furthermore, the 3D image analysis methods applied to these imaging techniques are described and discussed in this review.
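
    As a minimal illustration of the working principle shared by several of these techniques, the sketch below back-projects a depth map (such as one produced by a ToF, structured-light, or stereo system) into a 3D point cloud under an assumed pinhole camera model; the intrinsic parameters are placeholders, not values from the review.

```python
# Minimal sketch: back-project a depth map (e.g., from a ToF, structured-light, or
# stereo system) into a 3D point cloud with a pinhole camera model. The intrinsic
# parameters below are assumed placeholders, not values from the review.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: HxW array of depth values in meters; returns an Nx3 array of (X, Y, Z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Example with a synthetic flat depth map and assumed intrinsics
depth = np.full((480, 640), 1.5, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```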